
    Compute and Antitrust


    A solution scan of societal options to reduce transmission and spread of respiratory viruses: SARS-CoV-2 as a case study.

    Societal biosecurity - measures built into everyday society to minimize risks from pests and diseases - is an important aspect of managing epidemics and pandemics. We aimed to identify societal options for reducing the transmission and spread of respiratory viruses. We used SARS-CoV-2 (severe acute respiratory syndrome coronavirus 2) as a case study to meet the immediate need to manage the COVID-19 pandemic and eventually transition to more normal societal conditions, and to catalog options for managing similar pandemics in the future. We used a 'solution scanning' approach. We read the literature; consulted psychology, public health, medical, and solution scanning experts; crowd-sourced options using social media; and collated comments on a preprint. Here, we present a list of 519 possible measures to reduce SARS-CoV-2 transmission and spread. We provide a long list of options for policymakers and businesses to consider when designing biosecurity plans to combat SARS-CoV-2 and similar pathogens in the future. We also developed an online application to help with this process. We encourage testing of actions, documentation of outcomes, revisions to the current list, and the addition of further options.

    The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation

    This report surveys the landscape of potential security threats from malicious uses of AI, and proposes ways to better forecast, prevent, and mitigate these threats. After analyzing the ways in which AI may influence the threat landscape in the digital, physical, and political domains, we make four high-level recommendations for AI researchers and other stakeholders. We also suggest several promising areas for further research that could expand the portfolio of defenses, or make attacks less effective or harder to execute. Finally, we discuss, but do not conclusively resolve, the long-term equilibrium of attackers and defenders.
    Future of Humanity Institute, University of Oxford; Centre for the Study of Existential Risk, University of Cambridge; Center for a New American Security; Electronic Frontier Foundation; OpenAI. The Future of Life Institute is acknowledged as a funder.

    Activism by the AI Community: Analysing Recent Achievements and Future Prospects

    The artificial intelligence (AI) community has recently engaged in activism in relation to their employers, other members of the community, and their governments in order to shape the societal and ethical implications of AI. It has achieved some notable successes, but prospects for further political organising and activism are uncertain. We survey activism by the AI community over the last six years; apply two analytical frameworks, drawing upon the literature on epistemic communities and on worker organising and bargaining; and explore what these imply for the future prospects of the AI community. Success thus far has hinged on a coherent shared culture and on high bargaining power due to the high demand for a limited supply of AI 'talent'. Both are crucial to the future of AI activism and worthy of sustained attention.

    Filling gaps in trustworthy development of AI.

    Incident sharing, auditing, and other concrete mechanisms could help verify the trustworthiness of actors.

    Toward Trustworthy AI Development: Mechanisms for Supporting Verifiable Claims

    With the recent wave of progress in artificial intelligence (AI) has come a growing awareness of the large-scale impacts of AI systems, and recognition that existing regulations and norms in industry and academia are insufficient to ensure responsible AI development. In order for AI developers to earn trust from system users, customers, civil society, governments, and other stakeholders that they are building AI responsibly, they will need to make verifiable claims to which they can be held accountable. Those outside of a given organization also need effective means of scrutinizing such claims. This report suggests various steps that different stakeholders can take to improve the verifiability of claims made about AI systems and their associated development processes, with a focus on providing evidence about the safety, security, fairness, and privacy protection of AI systems. We analyze ten mechanisms for this purpose, spanning institutions, software, and hardware, and make recommendations aimed at implementing, exploring, or improving those mechanisms.